Improved Text Classification via Contrastive Adversarial Training

Authors

Abstract

We propose a simple and general method to regularize the fine-tuning of Transformer-based encoders for text classification tasks. Specifically, during fine-tuning we generate adversarial examples by perturbing the word embedding matrix of the model, and we perform contrastive learning on clean and adversarial examples in order to teach the model to learn noise-invariant representations. By training on both clean and adversarial examples along with the additional contrastive objective, we observe consistent improvement over standard fine-tuning on clean examples. On several GLUE benchmark tasks, our fine-tuned BERT-Large outperforms the BERT-Large baseline by 1.7% on average, and our fine-tuned RoBERTa-Large improves over the RoBERTa-Large baseline by 1.3%. We additionally validate our method in different domains using three intent classification datasets, where it improves over the baseline by 1-2% on average. For the challenging low-resource scenario, we train our system using half of the training data (per intent) in each dataset and achieve similar performance compared to the baseline trained with the full data.
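The core recipe described in the abstract can be sketched in a few lines. The following is a minimal illustration (not the authors' code): a toy embedding-plus-pooling encoder, an FGSM-style perturbation of the embedding output, and an NT-Xent contrastive term that pulls the clean and adversarial representations of the same input together. The model sizes, `eps`, and the temperature `tau` are illustrative assumptions.

```python
# Sketch of contrastive adversarial fine-tuning (illustrative, not the
# paper's implementation): perturb embeddings along the gradient sign and
# add a contrastive loss between clean and adversarial representations.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

def nt_xent(z1, z2, tau=0.1):
    """NT-Xent contrastive loss; (z1[i], z2[i]) are positive pairs."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)   # 2N x d, unit-norm rows
    sim = z @ z.t() / tau                         # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))             # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy "encoder": embedding lookup + mean pooling + linear classifier.
vocab, dim, n_cls, eps = 100, 16, 4, 1e-2
emb = torch.nn.Embedding(vocab, dim)
clf = torch.nn.Linear(dim, n_cls)

tokens = torch.randint(0, vocab, (8, 12))         # batch of token ids
labels = torch.randint(0, n_cls, (8,))

# 1) Clean forward pass; keep the embedding output to read its gradient.
e_clean = emb(tokens)
e_clean.retain_grad()
h_clean = e_clean.mean(dim=1)
loss_clean = F.cross_entropy(clf(h_clean), labels)
loss_clean.backward(retain_graph=True)

# 2) FGSM-style adversarial example in embedding space.
e_adv = (e_clean + eps * e_clean.grad.sign()).detach()
h_adv = e_adv.mean(dim=1)
loss_adv = F.cross_entropy(clf(h_adv), labels)

# 3) Total objective: clean CE + adversarial CE + contrastive term.
loss = loss_clean + loss_adv + nt_xent(h_clean, h_adv)
```

In actual fine-tuning the encoder would be a pretrained Transformer and `loss` would be backpropagated each step; the sketch only shows how the three terms are assembled.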


Similar resources

Generating Text via Adversarial Training

Generative Adversarial Networks (GANs) have achieved great success in generating realistic synthetic real-valued data. However, the discrete output of language models hinders the application of gradient-based GANs. In this paper we propose a generic framework employing Long Short-Term Memory (LSTM) and convolutional neural network (CNN) for adversarial training to generate realistic text...

Full text

Virtual Adversarial Training for Semi-Supervised Text Classification

Adversarial training provides a means of regularizing supervised learning algorithms while virtual adversarial training is able to extend supervised learning algorithms to the semi-supervised setting. However, both methods require making small perturbations to numerous entries of the input vector, which is inappropriate for sparse high-dimensional inputs such as one-hot word representations. We...

Full text


Text Generation using Generative Adversarial Training

Generative models reduce the need for laborious labeling of the dataset. Text generation techniques can be applied to improve language models, machine translation, summarization, and captioning. This project experiments with different recurrent neural network models to build generative adversarial networks for generating text from noise. The trained generator is capable of producing...

Full text

Adversarial Multi-task Learning for Text Classification

Neural network models have shown promise for multi-task learning, which focuses on learning shared layers that extract common, task-invariant features. However, in most existing approaches, the extracted shared features are prone to contamination by task-specific features or by noise brought in by other tasks. In this paper, we propose an adversarial multi-task lear...

Full text


Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2022

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v36i10.21362